docs: High Availability Setup documentation #2715
base: main
Conversation
/assign
README.md
Outdated
@@ -304,6 +305,12 @@ spec:

Other metrics can be sharded via [Horizontal sharding](#horizontal-sharding).

### High Availability

For high availability, run multiple kube-state-metrics replicas with anti-affinity rules to prevent single points of failure. Configure 2 replicas, anti-affinity rules on hostname, rolling update strategy with `maxUnavailable: 1`, and a PodDisruptionBudget with `minAvailable: 1`. |
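For illustration, a minimal manifest sketch matching this description could look like the following. The names, namespace, labels, and image tag are assumptions for the example, not the project's shipped manifests:

```yaml
# Sketch only: 2 replicas spread across nodes via pod anti-affinity,
# a rolling update that keeps one pod available, and a PDB for voluntary disruptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: kube-system
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kube-state-metrics
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/name: kube-state-metrics
              topologyKey: kubernetes.io/hostname
      containers:
        - name: kube-state-metrics
          # Image tag is only an example; pin to the release you actually run.
          image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.14.0
          ports:
            - name: http-metrics
              containerPort: 8080
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kube-state-metrics
  namespace: kube-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
```

On small clusters with fewer schedulable nodes than replicas, preferred (soft) anti-affinity may be a better fit than the required rule shown here.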
What is the value-add specifically for ksm here? I believe this would apply to most Kubernetes deployments if you want to run them highly available.
I'll update the documentation to explain that users must perform query-time deduplication.
README.md
Outdated
For high availability, run multiple kube-state-metrics replicas with anti-affinity rules to prevent single points of failure. Configure 2 replicas, anti-affinity rules on hostname, rolling update strategy with `maxUnavailable: 1`, and a PodDisruptionBudget with `minAvailable: 1`.

When using multiple replicas, Prometheus will scrape all instances, resulting in duplicate metrics that differ only in their instance labels. Handle deduplication in queries, for example with `avg without(instance) (metric_name)`. Brief inconsistencies may occur during state transitions but resolve quickly as replicas sync with the API server.
This depends on how you scrape the metrics with Prometheus, on a pod level or a service level.
yeah right, I'll update the section to clarify the differences.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: Rishab87. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
@mrueg I've addressed the reviews, can you have a look again?
README.md
Outdated
@@ -304,6 +305,12 @@ spec:

Other metrics can be sharded via [Horizontal sharding](#horizontal-sharding).

### High Availability

For high availability, run multiple kube-state-metrics replicas to prevent a single point of failure. A standard setup uses at least 2 replicas, pod anti-affinity rules to ensure they run on different nodes, and a PodDisruptionBudget (PDB) with `minAvailable: 1` to protect against voluntary disruptions. |
I would suggest adding an introductory paragraph:

- It should mention that multiple replicas increase the load on the Kubernetes API as a trade-off.
- It should mention that most likely you don't need HA if you scrape every 30s and you can tolerate a few missing scrapes (which usually is the case).
- Does a "standard" setup exist?
README.md
Outdated
For high availability, run multiple kube-state-metrics replicas to prevent a single point of failure. A standard setup uses at least 2 replicas, pod anti-affinity rules to ensure they run on different nodes, and a PodDisruptionBudget (PDB) with `minAvailable: 1` to protect against voluntary disruptions.

When scraping the individual pods directly in an HA setup, Prometheus will ingest duplicate metrics distinguished only by the instance label. This requires you to deduplicate the data in your queries, for example, by using `max without(instance) (your_metric)`. The correct aggregation function (max, sum, avg, etc.) is important and depends on the metric type, as using the wrong one can produce incorrect values for timestamps or during brief state transitions. |
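As one possible illustration of query-time deduplication, a Prometheus recording rule could pre-aggregate the duplicated series; the group and record names below are hypothetical:

```yaml
# Sketch: collapse series reported by both kube-state-metrics replicas into one,
# dropping the instance label that distinguishes them.
groups:
  - name: kube-state-metrics-dedup           # hypothetical group name
    rules:
      - record: kube_pod_status_phase:dedup  # hypothetical record name
        expr: max without (instance) (kube_pod_status_phase)
```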
Is Pod Scraping that common? I would assume most folks will scrape at the service level via a ServiceMonitor / Prometheus-Operator or similar.
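For reference, service-level scraping with the Prometheus Operator typically looks something like the sketch below; the namespace, labels, and port name are assumptions:

```yaml
# Sketch: discover and scrape kube-state-metrics through its Service.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-state-metrics
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
  endpoints:
    - port: http-metrics   # assumed port name on the Service
      interval: 30s
```

Note that a ServiceMonitor still discovers each pod behind the Service as its own scrape target, so duplicate series across replicas can still appear.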
yeah right
Force-pushed from 664072e to 68f5801.
@mrueg made the changes, can you please re-review?
What this PR does / why we need it:
Adds documentation for high availability setups
How does this change affect the cardinality of KSM: (increases, decreases or does not change cardinality)
does not change cardinality
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):

Fixes #2081